[day-24] U-net Data Augmentation

Preface

Data Augmentation

Data augmentation is essential to teach the network the desired invariance and robustness properties, when only few training samples are available.

Three benefits of image augmentation

  1. Invariance
  2. Robustness [1]
  3. Training with few samples

Here, robustness means how well the model copes with the conditions of the environment it is deployed in (lighting changes and so on).

In case of microscopical images we primarily need shift and rotation invariance as well as robustness to deformations and gray value variations.

This passage asks us to think about which kinds of invariance and robustness a given case needs. For microscopy cell images, the paper calls for:

  1. Shift and rotation invariance
  2. Robustness to deformations
  3. Robustness to gray value variations

A sketch of such an augmentation setup follows the list.
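
As a reference, here is a minimal sketch of what such augmentation could look like in a Keras workflow. All of the ranges below are assumptions for illustration; the paper does not prescribe a library or specific values.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative settings only; the paper does not specify exact ranges.
datagen = ImageDataGenerator(
    rotation_range=90,            # rotation invariance
    width_shift_range=0.1,        # shift invariance
    height_shift_range=0.1,
    brightness_range=(0.8, 1.2),  # gray value variations
    fill_mode="reflect",          # mirror the border pixels exposed by shifts/rotations
)
```

For segmentation, the same geometric transforms must also be applied to the masks (for example with a second generator that shares the seed and the geometric settings), while the gray value changes should stay on the images only.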

Especially random elastic deformations of the training samples seem to be the key concept to train a segmentation network with very few annotated images. We generate smooth deformations using random displacement vectors on a coarse 3 by 3 grid.

Here the paper argues that random elastic deformation is the key to training a segmentation network from very few annotated images, so random displacement vectors are applied on a coarse 3x3 grid.

The displacements are sampled from a Gaussian distribution with 10 pixels standard deviation. Per-pixel displacements are then computed using bicubic interpolation.

This explains how the elastic deformation is generated; a sketch is shown below.
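
A minimal sketch of the procedure for a 2D grayscale NumPy image. Only the 3x3 grid, the 10-pixel Gaussian, and the smooth interpolation come from the paper; the function name and the final warping step are my own choices, and scipy's cubic spline stands in for bicubic interpolation.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def elastic_deform(image, grid=3, sigma=10, seed=None):
    """Random smooth deformation from coarse random displacement vectors (sketch)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Random displacement vectors on a coarse grid x grid lattice,
    # sampled from a Gaussian with a 10-pixel standard deviation.
    coarse_dy = rng.normal(0.0, sigma, size=(grid, grid))
    coarse_dx = rng.normal(0.0, sigma, size=(grid, grid))
    # Upsample to per-pixel displacements with cubic (order=3) interpolation.
    dy = zoom(coarse_dy, (h / grid, w / grid), order=3)
    dx = zoom(coarse_dx, (h / grid, w / grid), order=3)
    # Warp the image by sampling at the displaced coordinates.
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy + dy, xx + dx])
    return map_coordinates(image, coords, order=1, mode="reflect")
```

The same displacement field should also be applied to the segmentation mask (with order=0, nearest-neighbour sampling) so the labels stay aligned with the deformed image.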

Drop-out layers at the end of the contracting path perform further implicit data augmentation.

This says that the drop-out layers added at the end of the contracting path also perform implicit data augmentation; see the sketch below.
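
For reference, a sketch of where such a drop-out layer could sit in a Keras U-net. The filter count, drop rate, and "same" padding are assumptions for brevity (the original network uses unpadded convolutions); the paper only says that drop-out sits at the end of the contracting path.

```python
from tensorflow.keras import layers

def contracting_end(x, filters=1024, drop_rate=0.5):
    """End of the contracting path with drop-out (sketch)."""
    x = layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
    x = layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
    # Randomly zeroing activations at every training step perturbs the
    # feature maps, which acts like implicit data augmentation [2].
    return layers.Dropout(drop_rate)(x)
```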

Conclusion

Something new again today: it turns out drop-out is related to data augmentation. Tomorrow let's dig into exactly how that works~~

Reference

[1] Model-Based Robust Deep Learning
[2] Dropout as data augmentation


Previous post
[day-23] U-net Training Details (5) - initial weight
Next post
[day-25] U-net Experiments (1) - rule
Series
30天只學U-net (30 articles)